Okay, let's get started. So today is about parallel and distributed languages. So why
are we looking into these languages? Basically because of the current multi-core revolution.
Everybody needs to write their programs in parallel, and potentially distributed as
well, even on a single chip. There's a lot of research currently going
on on chips that have multiple cores and communicate via message passing. These are called
network-on-a-chip processors: effectively a number of cores connected by a small on-chip network,
communicating via message passing. So even on these kinds of chips you're going to need languages which
are effectively distributed, but distributed on a single chip. So another way to look at
this is that you should write your programs as naturally and as close to the domain in which
you're programming as possible. If you're programming something related to physics, it makes sense
to have a programming language which incorporates some of the same concepts that you have in
your physics problem. You want to program in a language that is close to your problem domain. So
if the problem that you're trying to solve is inherently parallel, it makes
sense to have a parallel language as well. So parallel languages not only make things
faster, they can also make programs easier to understand, because they are closer to what you're trying
to solve. Of course, some problems are inherently too big to solve on a single machine with
a single core, or use too much memory, and so forth. So here's a problem with inherent parallelism.
For example, you're trying to simulate some real situation. You have cars that are driving
over roads, and you have stop lights, and you want to know how many cars can be on my
highway, on my roads, before a traffic jam occurs. It's very hard to solve such
a problem analytically, using queueing theory for example. It's easier to simulate such a situation and
then see if a traffic jam occurs. So how do you program a system such as this? Well, you
have cars that are driving independently of each other, so the cars
should be simulated in parallel. This fits nicely into a discrete-event simulation,
and in such a discrete-event simulation you have clocks which are naturally independent.
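The core of such a discrete-event simulation can be sketched in a few lines of Go; the `event` and `sim` names below are illustrative, not from the lecture. The point is that the clock jumps from one scheduled event to the next instead of ticking:

```go
package main

import (
	"fmt"
	"sort"
)

// event: something scheduled to happen at a simulated time.
// (Illustrative types; the lecture names no concrete API.)
type event struct {
	time   int
	action func(*sim)
}

// sim holds the simulation clock and the pending event queue.
type sim struct {
	clock int
	queue []event
}

// schedule inserts an event, keeping the queue sorted by time.
func (s *sim) schedule(t int, a func(*sim)) {
	s.queue = append(s.queue, event{t, a})
	sort.Slice(s.queue, func(i, j int) bool { return s.queue[i].time < s.queue[j].time })
}

// run pops events in time order; the clock jumps between events
// rather than ticking, which is what makes the simulation "discrete-event".
func (s *sim) run() {
	for len(s.queue) > 0 {
		e := s.queue[0]
		s.queue = s.queue[1:]
		s.clock = e.time
		e.action(s)
	}
}

func main() {
	s := &sim{}
	arrivals := 0
	// Two cars arrive at a stop light at simulated times 3 and 1.
	s.schedule(3, func(s *sim) { arrivals++; fmt.Println("car at t =", s.clock) })
	s.schedule(1, func(s *sim) { arrivals++; fmt.Println("car at t =", s.clock) })
	s.run()
	fmt.Println("cars:", arrivals) // prints "cars: 2"
}
```

In a parallel version, each car or stop light could run this loop on its own independent clock, exchanging timestamped messages with the others.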
Every car is independent, every stop light is independent. Or there are random events that
you need to generate, or you want to do something for all possible permutations of something, and all of these
permutations are inherently parallel, with no dependencies between each other,
and so forth. So there are a number of problems that are inherently parallel where it makes
programming easier if you use a parallel programming language. So in such programming
languages you usually have a number of virtual CPUs, or threads, that you can create on demand,
and you have some sort of communication substrate: these processors need to communicate in some
way. So either you have shared memory between the cores, or you have distributed memory, and
distributed memory does not mean a cluster, it can also mean a network on a chip. So you
have different paradigms for shared memory programming and distributed memory programming.
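The shared memory paradigm can be sketched in Go (an illustrative example, not lecture code): several goroutines, created on demand, all update one shared variable, which is exactly what forces you to guard it:

```go
package main

import (
	"fmt"
	"sync"
)

// sharedCounter spawns n goroutines ("virtual CPUs created on demand")
// that all increment one shared variable. Because the memory is shared,
// the increment must be protected by a mutex.
func sharedCounter(n int) int {
	var (
		mu    sync.Mutex
		total int
		wg    sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // without this lock the increments would race
			total++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return total
}

func main() {
	fmt.Println(sharedCounter(100)) // prints 100
}
```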
There is currently another trend going on, where distributed memory programming
is used to program shared memory machines. For example, Google Go is a language like
this, where you have processes that communicate via message passing even though the machine
itself is using shared memory. Why are we doing this? Because distributed memory programming
via message passing makes a number of problems disappear that you normally have with shared
memory programming. For example, if you communicate via message passing you do not require locks.
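In Go this looks roughly as follows (a minimal sketch with made-up names): producers send messages on a channel instead of writing to a shared variable, and only one goroutine ever touches the total, so no lock appears anywhere:

```go
package main

import (
	"fmt"
	"sync"
)

// fanIn spawns n producer goroutines that each send one message;
// a single consumer sums them. The channel is the only point of
// communication, so no mutex is needed. (Illustrative sketch.)
func fanIn(n int) int {
	events := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			events <- 1 // send a message instead of writing shared memory
		}()
	}
	// Close the channel once all producers have finished.
	go func() { wg.Wait(); close(events) }()

	total := 0 // only this goroutine ever touches total
	for e := range events {
		total += e
	}
	return total
}

func main() {
	fmt.Println(fanIn(4)) // prints 4
}
```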
Locks are only needed for shared memory programming. We will look at those in a bit. So what kind
of problems can there be when you program some parallel program? Well, you have deadlocks,
where processes are cyclically waiting for each other. And you have livelocks, where processes symmetrically
back off from some resource and then try again. Everybody knows these two problems.
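A deadlock is easy to produce with two locks: one thread holds lock A and waits for B while another holds B and waits for A. One conventional way out, sketched below with hypothetical account names, is to always acquire locks in one fixed global order:

```go
package main

import (
	"fmt"
	"sync"
)

// account: a mutex-protected balance. (Hypothetical example, not from
// the lecture.)
type account struct {
	id      int
	mu      sync.Mutex
	balance int
}

// transfer locks both accounts. Locking them in argument order could
// deadlock when two opposite transfers run concurrently; locking in a
// fixed global order (here: ascending id) avoids the cyclic wait.
func transfer(from, to *account, amount int) {
	first, second := from, to
	if second.id < first.id { // always lock the lower id first
		first, second = second, first
	}
	first.mu.Lock()
	second.mu.Lock()
	from.balance -= amount
	to.balance += amount
	second.mu.Unlock()
	first.mu.Unlock()
}

func main() {
	a := &account{id: 1, balance: 100}
	b := &account{id: 2, balance: 100}
	var wg sync.WaitGroup
	wg.Add(2)
	// Opposite-direction transfers could deadlock without the ordering.
	go func() { defer wg.Done(); transfer(a, b, 10) }()
	go func() { defer wg.Done(); transfer(b, a, 5) }()
	wg.Wait()
	fmt.Println(a.balance, b.balance) // prints 95 105
}
```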
Another problem is load balancing: every processor should perform the same amount of
work. From a programming language standpoint, the first two have programming language solutions.
If you have a programming language that structures parallelism in a nice way, then you can avoid
both deadlocks and livelocks via the programming language. Avoiding load imbalances, though,
is something that is very hard to tackle from a programming language. How do you from
Presenters
Access: Open access
Duration: 00:57:54 min
Recording date: 2013-06-05
Uploaded: 2019-05-09 11:29:03
Language: en-US